French Prosecutors Raid X Offices and Summon Musk as U.K. Launches New Probe Into Grok
French prosecutors carried out a search of the offices of Elon Musk's social media platform X on Tuesday morning and summoned the billionaire owner to attend a hearing in April. Conducted by the cybercrime unit of the Paris prosecutor's office, together with the French national cyber unit and the European Union police agency Europol, the search marks an escalation of the ongoing investigation into X over suspected abuse of algorithms, as well as allegations related to deepfake images and wider concerns over posts generated by the platform's AI chatbot, Grok. The office said the search was carried out with "the objective of ultimately ensuring the compliance of the X platform with French law", with a particular focus on X's Grok, designed by xAI, which chief prosecutor Laure Beccuau says has led "to the dissemination of Holocaust denial content and sexually explicit deepfakes." Europol spokesperson Jan Op Gen Oorth told the Associated Press that the police agency "is supporting the French authorities in this." Musk and former X CEO Linda Yaccarino have both been summoned for "voluntary interviews" with French prosecutors on April 20.
- Europe > France (1.00)
- Europe > United Kingdom (0.52)
- North America > United States (0.30)
- Africa (0.05)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government > France Government (0.98)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.59)
- Information Technology > Communications > Social Media (0.36)
UK to bring into force law to tackle Grok AI deepfakes this week
The UK will bring into force a law making it illegal to create non-consensual intimate images, following widespread concerns over Elon Musk's Grok AI chatbot. Technology Secretary Liz Kendall said the government would also seek to make it illegal for companies to supply tools designed to create such images. Speaking in the Commons, Kendall said AI-generated pictures of women and children in states of undress, created without a person's consent, were not harmless images but weapons of abuse. The BBC has approached X for comment. It previously said: "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.07)
- (15 more...)
- Law (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.53)
- Leisure & Entertainment > Sports (0.43)
Elton John calls UK government 'absolute losers' over AI copyright plans
In an interview on BBC One's Sunday with Laura Kuenssberg programme, John said the government was on course to "rob young people of their legacy and their income", adding: "It's a criminal offence, I think. The government are just being absolute losers, and I'm very angry about it." Last week, Kyle was accused of being too close to big tech after analysis showed a sharp increase in his department's meetings with companies such as Google, Amazon, Apple and Meta since Labour won the election last July. John referred to a similar amendment that received peers' support last week, only to be removed by the government in the Commons, in a tit-for-tat process that threatens to mire the data bill. "It's criminal, in that I feel incredibly betrayed: the House of Lords did a vote, and it was more than two to one in our favour. The government just looked at it as if to say: 'Hmmm, well the old people … like me can afford it'," said John.
- Government > Regional Government > Europe Government > United Kingdom Government (0.76)
- Media > Music (0.70)
How the UK's online safety bill aims to clean up the internet
Deepfakes, viral online challenges and protecting freedom of expression: the online safety bill sprawls across many corners of the internet and it's about to become official. The much-debated legislation is due to receive royal assent, and therefore become law, imminently. The purpose of the act is to make sure tech firms have the right moderating systems and processes in place to deal with harmful material. "This means a company cannot comply by chance," says Ben Packer, a partner at the law firm Linklaters. "It must have systems and processes in place to, for instance, minimise the length of time for which illegal content is present." Packer adds that if those systems and processes are deemed to be not up to scratch, platforms could be in breach of the act.
- Europe > United Kingdom > Wales (0.05)
- Europe > United Kingdom > England (0.05)
- Europe > Ukraine (0.05)
- Law (1.00)
- Government (1.00)
- Information Technology > Security & Privacy (0.91)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.72)
The Artificial Intelligence and Data Act (AIDA) – Companion document
Artificial intelligence (AI) systems are poised to have a significant impact on the lives of Canadians and the operations of Canadian businesses. The AIDA represents an important milestone in implementing the Digital Charter and ensuring that Canadians can trust the digital technologies that they use every day. The design, development, and use of AI systems must be safe, and must respect the values of Canadians. The framework proposed in the AIDA is the first step towards a new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses. The Government intends to build on this framework through an open and transparent regulatory development process. Consultations would be organized to gather input from a variety of stakeholders across Canada to ensure that the regulations achieve outcomes aligned with Canadian values. The global interconnectedness of the digital economy requires that the regulation of AI systems in the marketplace be coordinated internationally. Canada has drawn from and will work together with international partners – such as the European Union (EU), the United Kingdom, and the United States (US) – to align approaches, in order to ensure that Canadians are protected globally and that Canadian firms can be recognized internationally as meeting robust standards.
- Europe > United Kingdom (0.24)
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > California (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.69)
- Government > Regional Government > North America Government (0.47)
How do "technical" design-choices made when building algorithmic decision-making tools for criminal justice authorities create constitutional dangers?
This two-part paper argues that seemingly "technical" choices made by developers of machine-learning based algorithmic tools used to inform decisions by criminal justice authorities can create serious constitutional dangers, enhancing the likelihood of abuse of decision-making power and the scope and magnitude of injustice. Drawing on three algorithmic tools in use, or recently used, to assess the "risk" posed by individuals in order to inform how they should be treated by criminal justice authorities, we integrate insights from data science and public law scholarship to show how public law principles, and the more specific legal duties rooted in those principles, are routinely overlooked in algorithmic tool-building and implementation. We argue that technical developers must collaborate closely with public law experts to ensure that if algorithmic decision-support tools are to inform criminal justice decisions, those tools are configured and implemented in a manner that is demonstrably compliant with public law principles and doctrine, including respect for human rights, throughout the tool-building process.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > New York (0.04)
- Europe > United Kingdom > Wales (0.04)
- (11 more...)
- Law > Criminal Law (1.00)
- Government > Regional Government > North America Government > United States Government (0.67)
- Government > Regional Government > Europe Government > United Kingdom Government (0.46)
Government launches law review of self-driving cars
One of the biggest issues facing the introduction of self-driving cars is where blame is placed if one is involved in an accident. Does the responsibility lie with the vehicle owner, the car manufacturer or the firm that developed the autonomous driving software? This and more will be decided in the next three years, with the Government launching a new legal review to prepare the country for driverless cars hitting UK roads. Ministers have commissioned the study in order to 'ensure the UK remains one of the best places in the world to develop, test and drive self-driving vehicles'. Roads minister Jesse Norman yesterday announced the start of the review by the Law Commission of England and Wales and the Scottish Law Commission, which will examine any legal obstacles that might restrict the widespread introduction of self-driving vehicles and highlight the need for regulatory reforms.
- Europe > United Kingdom > Wales (0.26)
- Europe > United Kingdom > England > Buckinghamshire > Milton Keynes (0.05)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Law (1.00)
- (2 more...)